
    A Big Data Architecture for Log Data Storage and Analysis

    We propose an architecture for analysing database connection logs across different instances of databases within an intranet comprising over 10,000 users and associated devices. Our system uses Flume agents to send notifications to a Hadoop Distributed File System for long-term storage, and to ElasticSearch and Kibana for short-term visualisation, effectively creating a data lake for the extraction of log data. We adopt an ensemble of machine learning models to filter and process the indicators within the data, with the aim of predicting anomalies or outliers from feature vectors built from this log data.
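
    As a rough illustration of the anomaly detection step, the sketch below scores hypothetical connection-log feature vectors with an Isolation Forest; the paper's actual ensemble, features and thresholds are not given here, so the model choice and feature names are assumptions.

    # Minimal sketch (not the paper's pipeline): score database connection-log
    # records for anomalies using feature vectors and an Isolation Forest.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical per-session feature vectors:
    # [connections_per_hour, distinct_tables_touched, bytes_transferred, failed_logins]
    X = np.array([
        [12.0,   3.0, 1.2e6,  0.0],
        [15.0,   4.0, 0.9e6,  0.0],
        [11.0,   2.0, 1.1e6,  1.0],
        [240.0, 55.0, 9.8e7, 12.0],   # unusually heavy session
    ])

    model = IsolationForest(contamination=0.1, random_state=0).fit(X)
    scores = model.decision_function(X)   # lower score = more anomalous
    labels = model.predict(X)             # -1 marks suspected outliers

    for vec, score, label in zip(X, scores, labels):
        print(vec, round(float(score), 3), "outlier" if label == -1 else "normal")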

    Refactoring preserves security

    Refactoring allows changing a program without changing its behaviour from an observer’s point of view. To what extent does this invariance of behaviour also preserve security? We show that a program remains secure under refactoring. As a foundation, we use the Decentralized Label Model (DLM) for specifying secure information flows of programs and transition system models for their observable behaviour. On this basis, we provide a bisimulation-based formal definition of refactoring and show its correspondence to the formal notion of information flow security (noninterference). This permits us to show the security of refactoring patterns that have already been explored in practice.
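
    For reference, a standard two-run formulation of noninterference over a program's observable behaviour is sketched below in LaTeX; this is a textbook reminder rather than the paper's exact DLM-based definition, with $\approx_L$ denoting agreement on all public (low-labelled) state.

    % Textbook two-run noninterference (a reminder, not the paper's DLM definition):
    % if two initial states agree on public data, the corresponding final states
    % also agree on public data, so secrets cannot influence what an observer sees.
    \[
      \forall s_1, s_2.\;
      s_1 \approx_L s_2
      \;\wedge\;
      \langle P, s_1 \rangle \Downarrow s_1'
      \;\wedge\;
      \langle P, s_2 \rangle \Downarrow s_2'
      \;\Longrightarrow\;
      s_1' \approx_L s_2'
    \]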

    Using theorem provers to increase the precision of dependence analysis for information flow control

    Information flow control (IFC) is a category of techniques for enforcing information flow properties. In this paper we present the Combined Approach, a novel IFC technique that combines a scalable system-dependence-graph-based (SDG-based) approach with a precise logic-based approach built on a theorem prover. The Combined Approach is more precise than the SDG-based approach on its own, without sacrificing its scalability. For every potential illegal information flow reported by the SDG-based approach, the Combined Approach automatically generates proof obligations that, if valid, prove that there is no program path along which the reported information flow can happen. These proof obligations are then relayed to the logic-based approach. We also show how the SDG-based approach can provide additional information to the theorem prover that helps decrease the verification effort. Moreover, we present a prototypical implementation of the Combined Approach that uses the tools JOANA and KeY as the SDG-based and logic-based approaches, respectively.
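
    The kind of false positive the Combined Approach targets can be pictured with a small sketch (illustration only, not JOANA or KeY syntax): a dependence-graph view reports a flow from the secret to the output, while a prover can discharge an obligation stating that the guarded path is infeasible.

    # Illustration only: a flow that a purely graph-based analysis reports,
    # but that a theorem prover can rule out because the path is infeasible.
    def release(secret: int, x: int) -> int:
        out = 0
        if x > 0 and x < 0:      # contradictory guard: never true
            out = secret         # the SDG records a dependence out <- secret here
        return out               # proof obligation: no feasible path leaks secret

    # Informally, the obligation says there is no x with x > 0 and x < 0,
    # hence the reported flow secret -> out cannot occur on any execution.
    assert all(release(s, x) == 0 for s in (7, 42) for x in range(-3, 4))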

    Integration of Static and Dynamic Analysis Techniques for Checking Noninterference

    In this article, we present an overview of recent combinations of deductive program verification and automatic test generation on the one hand and static analysis on the other hand, with the goal of checking noninterference. Noninterference is the non-functional property that certain confidential information cannot leak to certain public output, i.e., the confidentiality of that information is always preserved. We define the noninterference properties that are checked along with the individual approaches that we use in different combinations. In one use case, our framework for checking noninterference employs deductive verification to automatically generate tests for noninterference violations with improved test coverage. In another use case, the framework provides two combinations of deductive verification with static analysis based on system dependence graphs to prove noninterference, thereby reducing the effort for deductive verification.
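
    One way to picture the testing side of such a framework is a two-run (self-composition style) check: run the program twice on low-equivalent inputs that differ only in their secrets and compare the public outputs. The sketch below uses a toy program and random inputs; it is a generic illustration, not the tests generated by the framework described in the article.

    # Two-run noninterference check on a toy program (generic illustration).
    import random

    def program(low: int, high: int) -> int:
        # intentionally leaky toy program: the secret influences the output
        # whenever the public input is even
        return (low + high) % 7 if low % 2 == 0 else low * 2

    def find_violation(trials: int = 10_000):
        for _ in range(trials):
            low = random.randrange(0, 2**31)
            h1 = random.randrange(0, 2**31)
            h2 = random.randrange(0, 2**31)
            if program(low, h1) != program(low, h2):
                return (low, h1, h2)   # witness: the secret changed a public output
        return None

    print("noninterference violation witness:", find_violation())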

    Physicochemical behaviour of a dinuclear uranyl complex formed with an octaphosphinoylated para-tert-butylcalix[8]arene. Spectroscopic studies in solution and in the solid state

    Spectrophotometric titrations of an octaphosphinoylated para-tert-butylcalix[8]arene (B8bL8) by uranyl nitrate, and vice versa, in anhydrous ethanol indicate that the species with 2:1 (uranyl:calixarene) stoichiometry is the major complex in solution. Based on these results, a synthesis route was designed to isolate this complex. The latter is an orange, non-hygroscopic polycrystalline powder with the chemical formula [(UO2)2(NO3)4(B8bL8)·2H2O]·2(H2O) (Compd. 1), as ascertained by elemental analysis. Spectroscopic characterization of Compd. 1 in the solid and liquid states suggests that a neutral dinuclear uranyl calixarene complex was formed. MIR and FIR spectra indicate that four phosphinoyl arms of the calixarene and two monodentate nitrates are bound to each 6-coordinate uranyl ion in the complex, because no vibrational frequencies from un-coordinated O=P groups or from ionic nitrates are present; in addition, the spectra reveal that water molecules form intramolecular hydrogen bonds with the monodentate nitrates. The de-convoluted luminescence and XPS spectra obtained in the solid state point to a similar chemical environment around each uranyl ion, as confirmed by the mono-exponential decay of the luminescence. The more rigid conformation acquired by the calixarene in the complex and the non-symmetrical arrangement of the coordinated nitrates result in a particular feature of the emission spectra at 77 K. No evidence of cation-cation interaction was found. A rough approach to the molecular structure of the complex by molecular modelling based on the experimental findings yielded a molecule that was useful for understanding the physicochemical behaviour of Compd. 1. This work was supported by CONACYT [grant Nr. 36689-E], Mexico, and the Swiss National Science Foundation [grant SCOPES 2000–2002: No. 7BUPJ062293.00/1], Switzerland.

    Automatic memory-based vertical elasticity and oversubscription on cloud platforms

    Hypervisors and Operating Systems support vertical elasticity techniques such as memory ballooning to dynamically assign the memory of Virtual Machines (VMs). However, Cloud Management Platforms (CMPs) such as OpenNebula or OpenStack do not currently support dynamic vertical elasticity. This paper describes a system that integrates with the CMP to provide automatic vertical elasticity, adapting the memory size of the VMs to their current memory consumption and featuring live migration to prevent overload scenarios, without downtime for the VMs. This enables an enhanced VM-per-host consolidation ratio while maintaining the Quality of Service for the VMs, since their memory is dynamically increased as necessary. The feasibility of the development is assessed via two case studies based on OpenNebula featuring (i) horizontal and vertical elastic virtual clusters on a production Grid infrastructure and (ii) elastic multi-tenant VMs that run Docker containers coupled with live migration techniques. The results show that memory oversubscription can be integrated into CMPs to deliver automatic memory management without severely impacting the performance of the VMs. This results in a memory management framework for on-premises Clouds that features live migration to safely enable transient oversubscription of physical resources in a CMP. © 2015 Elsevier B.V. All rights reserved. The authors would like to thank the Spanish "Ministerio de Economia y Competitividad" for the project CLUVIEM (TIN2013-44390-R) and the European Commission for the project INDIGO-DataCloud with grant number 653549. Moltó, G.; Caballer Fernández, M.; Alfonso Laguna, C.D. (2016). Automatic memory-based vertical elasticity and oversubscription on cloud platforms. Future Generation Computer Systems, 56:1-10. https://doi.org/10.1016/j.future.2015.10.002
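
    A minimal sketch of the underlying memory-ballooning mechanism is shown below; it talks to libvirt on a single KVM host, whereas the paper's framework integrates with the CMP (OpenNebula) and adds live migration. The VM name, thresholds and polling interval are assumptions, and the guest must expose balloon statistics for memoryStats() to report free memory.

    # Minimal vertical-elasticity sketch for one VM on a libvirt/KVM host.
    import time
    import libvirt

    LOW_FREE_KIB = 256 * 1024      # grow the VM when guest free memory drops below this
    HIGH_FREE_KIB = 1024 * 1024    # shrink it when guest free memory exceeds this
    STEP_KIB = 256 * 1024

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('tenant-vm-01')      # hypothetical VM name

    while True:
        stats = dom.memoryStats()                # requires the balloon driver and a stats period
        actual = stats.get('actual', 0)          # current balloon target (KiB)
        unused = stats.get('unused', 0)          # free memory inside the guest (KiB)

        if unused and unused < LOW_FREE_KIB:
            target = actual + STEP_KIB                         # scale memory up
        elif unused > HIGH_FREE_KIB:
            target = max(actual - STEP_KIB, LOW_FREE_KIB)      # reclaim memory
        else:
            target = actual

        if target != actual:
            dom.setMemoryFlags(target, libvirt.VIR_DOMAIN_AFFECT_LIVE)
        time.sleep(30)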

    The monocyte-macrophage axis in the intestine

    Macrophages are one of the most abundant leucocytes in the intestinal mucosa, where they are essential for maintaining homeostasis. However, they are also implicated in the pathogenesis of disorders such as inflammatory bowel disease (IBD), offering potential targets for novel therapies. Here we discuss the function of intestinal monocytes and macrophages during homeostasis and describe how these populations and their functions change during infection and inflammation. Furthermore, we review the current evidence that the intestinal macrophage pool requires continual renewal from circulating blood monocytes, unlike most other tissue macrophages, which appear to derive from primitive precursors that subsequently self-renew.

    Quantifying Privacy: A Novel Entropy-Based Measure of Disclosure Risk

    It is well recognised that data mining and statistical analysis pose a serious threat to privacy. This is true for financial, medical, criminal and marketing research. Numerous techniques have been proposed to protect privacy, including restriction and data modification. Recently proposed privacy models such as differential privacy and k-anonymity have received a lot of attention, and for the latter there are now several improvements of the original scheme, each removing some security shortcomings of the previous one. However, the challenge lies in evaluating and comparing the privacy provided by various techniques. In this paper we propose a novel entropy-based security measure that can be applied to any generalisation, restriction or data modification technique. We use our measure to empirically evaluate and compare a few popular methods, namely query restriction, sampling and noise addition.
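
    The general idea behind an entropy-based disclosure measure can be sketched as follows: compare the attacker's uncertainty about an individual's sensitive value before and after a release, with uncertainty measured as Shannon entropy. The sketch below is a generic illustration on assumed data, not the specific measure defined in the paper.

    # Shannon-entropy view of disclosure risk (generic illustration).
    from collections import Counter
    from math import log2

    def shannon_entropy(values):
        counts = Counter(values)
        total = sum(counts.values())
        return -sum((c / total) * log2(c / total) for c in counts.values())

    # Hypothetical sensitive attribute across the population (attacker's prior)
    population = ['flu', 'flu', 'cancer', 'flu', 'hiv', 'flu', 'cancer', 'flu']

    # Values still consistent with the target after a generalised release links
    # the target to a small equivalence class (attacker's posterior)
    equivalence_class = ['flu', 'cancer']

    prior_h = shannon_entropy(population)
    posterior_h = shannon_entropy(equivalence_class)

    # The larger the entropy loss, the more the release discloses.
    print(f"prior entropy     = {prior_h:.3f} bits")
    print(f"posterior entropy = {posterior_h:.3f} bits")
    print(f"entropy loss      = {prior_h - posterior_h:.3f} bits")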

    Contextualisation of Data Flow Diagrams for security analysis

    Data flow diagrams (DFDs) are popular for sketching systems for subsequent threat modelling. Their limited semantics make reasoning about them difficult, but enriching them endangers their simplicity and, with it, their ease of adoption. We present an approach for reasoning about tainted data flows in design-level DFDs by putting them in context with other complementary usability and requirements models. We illustrate our approach with a pilot study, in which tainted data flows were identified without any augmentation to either the DFD or its complementary models.
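
    A bare DFD taint analysis of the kind the contextual models are meant to sharpen can be sketched as a reachability check over a directed graph; the node names and trust labels below are assumptions for illustration, and without the complementary usability and requirements models every reachable sink would simply be flagged.

    # Taint propagation over a DFD modelled as a directed graph (illustration).
    from collections import deque

    edges = {
        'External User': ['Web Form'],
        'Web Form': ['Validation Process'],
        'Validation Process': ['Customer DB'],
        'Admin Console': ['Customer DB'],
    }
    untrusted_sources = {'External User'}
    sensitive_sinks = {'Customer DB'}

    def tainted_nodes(edges, sources):
        """Breadth-first propagation of taint along data flows."""
        seen = set(sources)
        queue = deque(sources)
        while queue:
            node = queue.popleft()
            for succ in edges.get(node, []):
                if succ not in seen:
                    seen.add(succ)
                    queue.append(succ)
        return seen

    for sink in sensitive_sinks & tainted_nodes(edges, untrusted_sources):
        print(f"potentially tainted flow reaches sensitive sink: {sink}")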